448 research outputs found

    Do spillover benefits grow with rising foreign direct investment? An empirical examination of the case of China

    Using data on the Chinese manufacturing industry for 2001, this paper examines the impact of foreign presence on the performance of locally-owned Chinese firms. Our key result supports a curvilinear functional form: foreign penetration rates in excess of roughly two-thirds of industrial capital are associated with declining spillover benefits, indicating the dominance of negative spillovers. The curvilinear relationship is particularly strong in labour-intensive industries, in contrast to a standard linear relationship in technology-intensive sectors. The complexity of these spillover effects challenges the laissez-faire view that ‘the more inward FDI, the better’ and that inward FDI into all types of domestic industry is equally valuable in terms of performance benefits. Our findings argue for policy measures that strengthen domestically-owned Chinese industry so that it can provide effective competition to foreign firms and absorb spillover benefits more effectively.
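
    As an illustration only (not the paper's estimated model; the symbols are hypothetical), the curvilinear form can be read as a quadratic in the industry-level foreign penetration rate FP, with a turning point near the reported two-thirds threshold:

        \text{Performance}_{ij} = \beta_0 + \beta_1 \, FP_j + \beta_2 \, FP_j^2 + \gamma' X_{ij} + \varepsilon_{ij},
        \qquad FP^{*} = -\frac{\beta_1}{2\beta_2} \approx \tfrac{2}{3}

    With \beta_1 > 0 and \beta_2 < 0, spillover benefits to local firm i in industry j rise up to FP* and decline beyond it, matching the reported pattern.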

    Functional diversity of CTCFs is encoded in their binding motifs

    CTCF ChIP-seq data. Cell lines and statistics for the ChIP-seq data used in the study. (DOCX 55 kb)

    Subnational institutions and open innovation: evidence from China

    Purpose: The purpose of this paper is to examine how subnational institutions within a country explain the performance consequences of open innovation (OI) in emerging market enterprises (EMEs). Design/methodology/approach: The paper conducts a regression analysis using a novel panel data set comprising 438 innovative Chinese firms over the period 2008-2011. Findings: The authors show that, although openness to external actors improves innovation performance on average, this effect is more pronounced for EMEs that operate in subnational regions with higher levels of intellectual property rights (IPR) enforcement and factor market development. The findings point to the context-dependent nature of OI strategy and the complementary effect of institutional parameters in emerging markets, and they help to reconcile contrasting findings on the effect of OI in the prior literature. Originality/value: This paper extends the literature on OI by suggesting that analysis of the performance consequences of OI strategy should go beyond the nexus between OI and firm performance and instead focus on subnational institutions, such as region-specific IPR enforcement, factor market development and intermediation market development, which may facilitate or constrain the effect of the OI model.
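
    Read as a moderated panel regression, the finding can be sketched as follows (hypothetical symbols, not the authors' exact specification):

        \text{InnovPerf}_{it} = \alpha + \beta_1 \, \text{Openness}_{it} + \beta_2 \, (\text{Openness}_{it} \times \text{IPR}_r) + \beta_3 \, (\text{Openness}_{it} \times \text{FactorMkt}_r) + \gamma' X_{it} + \varepsilon_{it}

    Positive interaction coefficients \beta_2 and \beta_3 would correspond to the reported complementarity between openness and region-level institutions.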

    BLEURT Has Universal Translations: An Analysis of Automatic Metrics by Minimum Risk Training

    Automatic metrics play a crucial role in machine translation. Despite the widespread use of n-gram-based metrics, there has been a recent surge in the development of pre-trained model-based metrics that focus on measuring sentence semantics. However, these neural metrics, while achieving higher correlations with human evaluations, are often considered to be black boxes with potential biases that are difficult to detect. In this study, we systematically analyze and compare various mainstream and cutting-edge automatic metrics from the perspective of their guidance for training machine translation systems. Through Minimum Risk Training (MRT), we find that certain metrics exhibit robustness defects, such as the presence of universal adversarial translations in BLEURT and BARTScore. In-depth analysis suggests two main causes of these robustness deficits: distribution biases in the training datasets, and the tendency of the metric paradigm. By incorporating token-level constraints, we enhance the robustness of evaluation metrics, which in turn leads to an improvement in the performance of machine translation systems. Code is available at https://github.com/powerpuffpomelo/fairseq_mrt. Comment: Accepted to ACL 2023 main conference.
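
    For orientation, the Minimum Risk Training objective the study relies on can be sketched as below (a minimal PyTorch-style sketch with illustrative tensor names and a generic "1 - metric score" cost; it is not the released fairseq_mrt implementation):

        import torch

        def mrt_loss(candidate_logprobs: torch.Tensor,
                     candidate_costs: torch.Tensor,
                     alpha: float = 0.005) -> torch.Tensor:
            """Expected risk over a sampled set of candidate translations.

            candidate_logprobs: (num_candidates,) model log p(y | x) per sample
            candidate_costs:    (num_candidates,) cost per sample, e.g. 1 - BLEURT
            alpha:              sharpness of the renormalised candidate distribution
            """
            # Renormalise the scaled log-probabilities over the candidate set only
            q = torch.softmax(alpha * candidate_logprobs, dim=-1)
            # MRT minimises the expected cost under this candidate distribution,
            # which is how a metric admitting universal adversarial translations
            # can be gamed during training
            return (q * candidate_costs).sum()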

    Finding Sparse Structures for Domain Specific Neural Machine Translation

    Neural machine translation often adopts the fine-tuning approach to adapt to specific domains. However, unrestricted fine-tuning can easily degrade performance on the general domain and over-fit to the target domain. To mitigate this issue, we propose Prune-Tune, a novel domain adaptation method via gradual pruning. It learns tiny domain-specific sub-networks during fine-tuning on new domains. Prune-Tune alleviates the over-fitting and degradation problems without model modification. Furthermore, Prune-Tune is able to sequentially learn a single network with multiple disjoint domain-specific sub-networks for multiple domains. Empirical results show that Prune-Tune outperforms several strong competitors on the target-domain test set without sacrificing quality on the general domain, in both single- and multi-domain settings. The source code and data are available at https://github.com/ohlionel/Prune-Tune. Comment: Accepted to AAAI 2021.
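
    A rough sketch of the gradual-pruning idea (magnitude-based masks that reserve a subset of weights for the new domain, then gradient masking during fine-tuning); the helper names are hypothetical and not taken from the released Prune-Tune code:

        import torch

        def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
            """Mark the smallest-magnitude weights as free slots for the new domain."""
            k = max(1, int(weight.numel() * sparsity))
            threshold = weight.abs().flatten().kthvalue(k).values
            return (weight.abs() <= threshold).float()  # 1 = domain-specific slot

        def mask_general_gradients(model: torch.nn.Module, masks: dict) -> None:
            """Call after loss.backward(): zero the gradients of general-domain weights
            so that fine-tuning only updates the reserved domain-specific slots."""
            for name, param in model.named_parameters():
                if name in masks and param.grad is not None:
                    param.grad.mul_(masks[name])

    Keeping a separate mask per domain is what would let a single network host multiple disjoint domain-specific sub-networks, as the abstract describes.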

    GigaST: A 10,000-hour Pseudo Speech Translation Corpus

    This paper introduces GigaST, a large-scale pseudo speech translation (ST) corpus. We create the corpus by translating the text in GigaSpeech, an English ASR corpus, into German and Chinese. The training set is translated by a strong machine translation system and the test set is translated by humans. ST models trained with the addition of our corpus obtain new state-of-the-art results on the MuST-C English-German benchmark test set. We provide a detailed description of the translation process and verify its quality. We make the translated text data public and hope to facilitate research in speech translation. Additionally, we release the training scripts on NeurST to make it easy to replicate our systems. The GigaST dataset is available at https://st-benchmark.github.io/resources/GigaST. Comment: Submitted to Interspeech 2022.